UK seeking to curb AI child sex abuse imagery with tougher testing
The UK government will allow tech firms and child safety charities to proactively test artificial intelligence tools to make sure they cannot create child sexual abuse imagery. An amendment to the Crime and Policing Bill announced on Wednesday would enable authorised testers to assess models for their ability to generate illegal child sexual abuse material (CSAM) prior to their release. Technology Secretary Liz Kendall said the measures would ensure AI systems can be made safe at the source - though some campaigners argue more still needs to be done. It comes as the Internet Watch Foundation (IWF) said the number of AI-related CSAM reports had doubled over the past year. The charity, one of only a few in the world licensed to actively search for child abuse content online, said it had removed 426 pieces of reported material between January and October 2025.
- North America > United States (0.50)
- South America (0.15)
- North America > Central America (0.15)
- (13 more...)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Networks (0.35)
Sex offender banned from using AI tools in landmark UK case
A sex offender convicted of making more than 1,000 indecent images of children has been banned from using any "AI creating tools" for the next five years in the first known case of its kind. Anthony Dover, 48, was ordered by a UK court "not to use, visit or access" artificial intelligence generation tools without the prior permission of police as a condition of a sexual harm prevention order imposed in February. The ban prohibits him from using tools such as text-to-image generators, which can make lifelike pictures based on a written command, and "nudifying" websites used to make explicit "deepfakes". Dover, who was given a community order and £200 fine, has also been explicitly ordered not to use Stable Diffusion software, which has reportedly been exploited by paedophiles to create hyper-realistic child sexual abuse material, according to records from a sentencing hearing at Poole magistrates court. The case is the latest in a string of prosecutions where AI generation has emerged as an issue and follows months of warnings from charities over the proliferation of AI-generated sexual abuse imagery.
Apple gives more detail on new iPhone photo scanning feature as controversy continues
Apple has released yet more details on its new photo-scanning features, as the controversy over whether they should be added to the iPhone continues. Earlier this month, Apple announced that it would be adding three new features to iOS, all of which are intended to fight against child sexual exploitation and the distribution of abuse imagery. One adds new information to Siri and search, another checks messages sent to children to see if they might contain inappropriate images, and the third compares photos on an iPhone with a database of known child sexual abuse material (CSAM) and alerts Apple if it is found. It is the latter of those three features that has proven especially controversial. Critics say that the feature is in contravention of Apple's commitment to privacy, and that it could in the future be used to scan for other kinds of images, such as political pictures on the phones of people living in authoritarian regimes.
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Health & Medicine > Therapeutic Area > Pediatrics/Neonatology (0.37)
- Information Technology > Communications > Mobile (0.84)
- Information Technology > Artificial Intelligence (0.59)